Want to run AI on your PC? You're gonna need a bigger hard drive

PCWorld

When people talk about the "size" of an AI model, they're referring to the number of "parameters" it contains. A parameter is a single variable in the model that helps determine how it generates output, and a given model can have billions of them. Also referred to as model weights, these parameters must be stored on disk and loaded into memory for the model to run -- and when a model has billions of parameters, storage requirements quickly balloon. In short, the storage space an LLM consumes grows with its parameter count, and the same is true of other types of generative AI models.
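
To make the arithmetic concrete, here is a minimal Python sketch of the usual back-of-the-envelope estimate: weight storage is roughly parameter count times bytes per parameter. The parameter count and precisions below are illustrative assumptions, not figures from the article; real on-disk size also depends on the checkpoint format and any quantization applied.

```python
# Back-of-the-envelope storage estimate for a model's weights.
# Illustrative sketch: actual on-disk size also depends on the file
# format (e.g., safetensors, GGUF) and how the weights are quantized.

BYTES_PER_PARAM = {
    "fp32": 4,    # 32-bit floats
    "fp16": 2,    # 16-bit floats, common for downloaded checkpoints
    "int8": 1,    # 8-bit quantized weights
    "int4": 0.5,  # 4-bit quantized weights
}

def weight_storage_gb(num_params: float, precision: str = "fp16") -> float:
    """Approximate disk space (in GB) needed to store the model weights."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# A hypothetical 7-billion-parameter model at different precisions:
for precision in ("fp32", "fp16", "int4"):
    print(f"7B params @ {precision}: ~{weight_storage_gb(7e9, precision):.0f} GB")
# fp32 -> ~28 GB, fp16 -> ~14 GB, int4 -> ~4 GB
```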


A Brain-Inspired Chip Can Run AI With Far Less Energy

#artificialintelligence

Artificial intelligence algorithms cannot keep growing at their current pace. Algorithms like deep neural networks -- which are loosely inspired by the brain, with multiple layers of artificial neurons linked to each other via numerical values called weights -- get bigger every year. But these days, hardware improvements are no longer keeping pace with the enormous amount of memory and processing capacity required to run these massive algorithms. Soon, the size of AI algorithms may hit a wall. And even if we could keep scaling up hardware to meet AI's demands, there's another problem: running these algorithms on traditional computers wastes an enormous amount of energy.
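
As a rough illustration of the structure described above (not code from the article), here is a toy dense network in Python/NumPy: a few layers of artificial neurons linked by weight matrices, with layer sizes chosen arbitrarily. Even this small example has roughly half a million weights, and every forward pass has to shuttle all of them between memory and processor -- which is a large part of the energy cost on conventional hardware that brain-inspired chips aim to avoid.

```python
# A minimal sketch of layers of artificial "neurons" connected by
# numerical weights. Layer sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 512, 256, 10]  # input -> two hidden layers -> output

# Each connection between adjacent layers is one weight (a parameter).
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x: np.ndarray) -> np.ndarray:
    """Pass an input through every layer; each step is a matrix multiply."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ w + b, 0.0)       # ReLU activation
    return x @ weights[-1] + biases[-1]      # linear output layer

n_params = sum(w.size for w in weights) + sum(b.size for b in biases)
print(f"parameters: {n_params:,}")           # ~536k even for this toy network
print(forward(rng.standard_normal(784)).shape)  # (10,)
```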


The MLops company making it easier to run AI workloads across hybrid clouds

#artificialintelligence

There is no shortage of options for organizations seeking to deploy and run machine learning and artificial intelligence (AI) workloads in the cloud or on-premises. A key challenge for many, though, is figuring out how to orchestrate those workloads across multi-cloud and hybrid-cloud environments. Today, AI compute orchestration vendor Run AI is announcing an update to its Atlas Platform designed to make it easier for data scientists to deploy, run, and manage machine learning workloads across different deployment targets, including cloud providers and on-premises environments.
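
Orchestration platforms of this kind typically schedule workloads on Kubernetes, so as a hedged sketch of what submitting a GPU workload can look like, here is an example using the official kubernetes Python client. The scheduler name, job name, and container image below are illustrative assumptions, not details taken from the article or from Run AI's documentation.

```python
# Hedged sketch: submitting a GPU training job to a Kubernetes cluster
# via the official kubernetes Python client. The custom scheduler name
# and all job/image names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig / current context

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="train-demo"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                scheduler_name="runai-scheduler",  # hypothetical custom scheduler
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="my-registry/trainer:latest",  # placeholder image
                        command=["python", "train.py"],
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"}  # request one GPU
                        ),
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

The same manifest could target a different cluster (another cloud or an on-premises environment) simply by switching the kubeconfig context, which is the kind of multi-target deployment the article describes.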